Military Operation


@Grok, Did Venezuela 'Deserve It'?

The Atlantic - Technology

The information war will be fought through chatbots. Hours before President Donald Trump announced Nicolás Maduro's capture on Saturday morning, people had questions for Grok, Elon Musk's chatbot. Footage was circulating on X of explosions in Venezuela, and some users assumed the United States was responsible: "Hey @grok why is Trump sending US airstrikes to bomb Venezuela. Do you think they deserve it or not?" one asked. "@grok what is the reason why America is bombing Venezuela," another asked.


Collateral Damage Assessment Model for AI System Target Engagement in Military Operations

Maathuis, Clara, Cools, Kasper

arXiv.org Artificial Intelligence

Abstract: In an era where AI (Artificial Intelligence) systems play an increasing role on the battlefield, ensuring responsible targeting demands rigorous assessment of potential collateral effects. In this context, a novel collateral damage assessment model for target engagement of AI systems in military operations is introduced. Its layered structure captures the categories and architectural components of the AI systems to be engaged, together with the corresponding engagement vectors and contextual aspects. At the same time, spreading, severity, likelihood, and evaluation metrics are considered in order to provide a clear representation enhanced by transparent reasoning mechanisms. Further, the model is demonstrated and evaluated through instantiation, which serves as a basis for further dedicated efforts aimed at building responsible and trustworthy intelligent systems for assessing the effects produced by engaging AI systems in military operations.
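The abstract names the metrics but not how they combine. A minimal sketch of one plausible aggregation is below; the class, field scales, and the max-over-effects rule are illustrative assumptions, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class CollateralEffect:
    """One potential collateral effect of engaging an AI system component."""
    spreading: float   # reach across connected components, normalized 0..1
    severity: float    # harm magnitude, normalized 0..1
    likelihood: float  # probability the effect occurs, 0..1

def assessment_score(effects: list[CollateralEffect]) -> float:
    """Aggregate per-effect risk as likelihood-weighted severity scaled by
    spreading; take the maximum so a single high-risk effect dominates."""
    if not effects:
        return 0.0
    return max(e.likelihood * e.severity * e.spreading for e in effects)
```

Taking the maximum rather than the sum reflects a conservative reading of "assessment": one unacceptable effect should block engagement regardless of how benign the others are.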


Human-AI Teaming Co-Learning in Military Operations

Maathuis, Clara, Cools, Kasper

arXiv.org Artificial Intelligence

In a time of rapidly evolving military threats and increasingly complex operational environments, the integration of AI into military operations offers significant advantages. At the same time, it implies various challenges and risks in building and deploying human-AI teaming systems in an effective and ethical manner. Currently, understanding and coping with these is often tackled from an external perspective that treats the human-AI teaming system as a collective agent. Nevertheless, zooming into the dynamics inside the system allows a broader palette of relevant multidimensional responsibility, safety, and robustness aspects to be addressed. To this end, this research proposes the design of a trustworthy co-learning model for human-AI teaming in military operations that encompasses a continuous and bidirectional exchange of insights between the human and AI agents as they jointly adapt to evolving battlefield conditions. It does so by integrating four dimensions. First, adjustable autonomy, for dynamically calibrating the autonomy levels of agents depending on aspects like mission state, system confidence, and environmental uncertainty. Second, multi-layered control, which accounts for continuous oversight, monitoring of activities, and accountability. Third, bidirectional feedback, with explicit and implicit feedback loops between the agents to ensure proper communication of the reasoning, uncertainties, and learned adaptations of each agent. And fourth, collaborative decision-making, which implies the generation, evaluation, and proposal of decisions together with their confidence levels and underlying rationale. The proposed model is accompanied by concrete exemplifications and recommendations that contribute to further developing responsible and trustworthy human-AI teaming systems in military operations.
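The first dimension, adjustable autonomy, lends itself to a compact sketch. The function below maps the three inputs the abstract mentions (mission state, system confidence, environmental uncertainty) to an autonomy level; the level names and thresholds are my own assumptions for illustration, not values from the paper:

```python
def autonomy_level(mission_state: str, confidence: float, uncertainty: float) -> str:
    """Calibrate the AI agent's autonomy from mission state, system
    confidence (0..1), and environmental uncertainty (0..1).
    Returns 'human_control', 'supervised', or 'autonomous'."""
    if mission_state == "critical" or confidence < 0.5:
        return "human_control"   # human takes over decision authority
    if uncertainty > 0.7 or confidence < 0.8:
        return "supervised"      # AI acts, human monitors and can veto
    return "autonomous"          # AI acts within delegated bounds
```

In a full co-learning system, a rule like this would be re-evaluated continuously as both agents update their confidence and uncertainty estimates, rather than set once at mission start.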


How Signal's Meredith Whittaker Remembers SignalGate: 'No Fucking Way'

WIRED

The Signal Foundation president recalls where she was when she heard Trump cabinet officials had added a journalist to a highly sensitive group chat. In March of this year, Meredith Whittaker was at her kitchen table in Paris when Signal, the encrypted messaging service she runs, suddenly became an international headline. A colleague sent their group chat the story ricocheting across the globe: "The Trump Administration Accidentally Texted Me Its War Plans." Of course, you know the rest: In the piece, The Atlantic's editor in chief, Jeffrey Goldberg, detailed how he'd been added to a Signal chat about an upcoming military operation in Yemen. Over the following days and weeks, the incident would become known as "SignalGate" and created a legitimate risk that the fallout would cause people to question Signal's security, instead of pointing their fingers at the profoundly dubious op-sec of senior-level Trump officials. In fact, Signal's user numbers grew by leaps and bounds, both in the US and around the world. It's growth that, Whittaker thinks, is coming at a time when "people are feeling in a much deeper, much more personal way why privacy might be important." On this week's episode, I talked to Whittaker, who also cofounded the AI Now Institute, about the aftermath of SignalGate, the trajectory of artificial intelligence, and the tech industry's current relationship with politics. Nice to see you, Katie. Nice to see you, too. Brace yourself, we always start these conversations with a little warmup, so I'm going to ask you some very fast questions. I knew you were gonna say that. What's the weirdest AI application you've ever seen? A chatbot that pretends to be your friend.


Prepared, not paranoid: What you need to know to protect yourself from a possible terror attack

FOX News

Former FBI special agent Nicole Parker joins 'Fox & Friends First' to discuss why the U.S. is on 'high alert' for Iranian threats inside the country after U.S. airstrikes on three nuclear sites. In times like this, you hear the concern from your neighbors. You talk about it with people at the gym. It's the topic of conversation over morning coffee -- from small towns to big cities -- "Are we going to see an increase in terror attacks here at home?" Now, there is news that Iranian "sleeper cells" pose a dangerous threat. With reports that such cells could carry out attacks on U.S. citizens in retaliation for recent military operations in Iran, it's understandable that Americans are feeling concerned for their safety here at home.


The AI Alignment Paradox

Communications of the ACM

The release of GPT-3, and later ChatGPT, catapulted large language models from the proceedings of computer science conferences to newspaper headlines across the globe, fueling their rise to one of today's most hyped technologies. The public's awe about GPT-3's knowledge and fluency was quickly blemished by concerns regarding its potential to radicalize, instigate, and misinform, for example, by stating that Bill Gates aimed to "kill billions of people with vaccines" or that Hillary Clinton was a "high-level satanic priestess."[4] These shortcomings, in turn, have sparked a surge in research on AI alignment,[7] a field aiming to "steer AI systems toward a person's or group's intended goals, preferences, and ethical principles" (definition by Wikipedia). A well-aligned AI system will "understand" what is "good" and what is "bad" and will do only the "good" while avoiding the "bad." The resulting techniques, including instruction fine-tuning, reinforcement learning from human feedback, and so forth, have contributed in major ways to improving the output quality of large language models.


6 killed in Israeli drone strike on occupied West Bank's Jenin refugee camp

Al Jazeera

A Palestinian teenager and three brothers were among at least six people killed in an Israeli air attack on the Jenin refugee camp in the occupied West Bank, according to reports. The Palestinian news agency Wafa said that an Israeli drone fired three missiles at a group of people near a traffic roundabout in the camp on Tuesday evening, killing six people, including a 15-year-old boy, and injuring several others. Five other victims of the attack were aged between 23 and 34, and included three brothers, Wafa reports. Earlier this month, an Israeli drone strike on the occupied West Bank's Tammun town killed two Palestinian children and a 23-year-old from the same family. Al Jazeera's Hamdah Salhut said the drone strike on the Jenin camp comes amid intense Israeli military raids on local communities and the killing by Israeli forces of almost 800 Palestinians in the occupied West Bank since October 7, 2023 – as well as the arrest of several thousand others.


Leveraging Edge Intelligence and LLMs to Advance 6G-Enabled Internet of Automated Defense Vehicles

Onsu, Murat Arda, Lohan, Poonam, Kantarci, Burak

arXiv.org Artificial Intelligence

The evolution of Artificial Intelligence (AI) and its subset Deep Learning (DL) has profoundly impacted numerous domains, including autonomous driving. The integration of autonomous driving in military settings reduces human casualties and enables precise and safe execution of missions in hazardous environments, while allowing for reliable logistics support without the risks associated with fatigue-related errors. However, relying solely on autonomous driving requires an advanced decision-making model that is adaptable and optimal in any situation. Considering the presence of numerous interconnected autonomous vehicles in mission-critical scenarios, Ultra-Reliable Low Latency Communication (URLLC) is vital for ensuring seamless coordination, real-time data exchange, and instantaneous response to dynamic driving environments. The advent of 6G strengthens the Internet of Automated Defense Vehicles (IoADV) concept within the realm of the Internet of Military Defense Things (IoMDT) by enabling robust connectivity, crucial for real-time data exchange, advanced navigation, and enhanced safety features through IoADV interactions. On the other hand, a critical advancement in this space is the use of pre-trained Generative Large Language Models (LLMs) for decision-making and communication optimization in autonomous driving. Hence, this work presents opportunities and challenges with a vision of realizing the full potential of these technologies in critical defense applications, especially through the advancement of IoADV and its role in enhancing autonomous military operations.
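One practical consequence of pairing URLLC links with edge-hosted LLMs is a routing decision: offload reasoning to the edge only when the link can meet the response deadline. The rule below is a simplified sketch of that trade-off; the function name, the 10 ms default budget, and the two-path design are assumptions for illustration, not taken from the paper:

```python
def select_decision_path(link_latency_ms: float, deadline_ms: float = 10.0) -> str:
    """Route a decision request: a URLLC-class link within the deadline
    budget can offload to an edge-hosted LLM; otherwise the vehicle
    falls back to its onboard model."""
    return "edge_llm" if link_latency_ms <= deadline_ms else "onboard_model"
```

A real IoADV deployment would fold in jitter, packet loss, and mission criticality rather than a single latency number, but the fallback structure (richer remote model, guaranteed local one) is the core idea.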


Trust or Bust: Ensuring Trustworthiness in Autonomous Weapon Systems

Cools, Kasper, Maathuis, Clara

arXiv.org Artificial Intelligence

The integration of Autonomous Weapon Systems (AWS) into military operations presents both significant opportunities and challenges. This paper explores the multifaceted nature of trust in AWS, emphasising the necessity of establishing reliable and transparent systems to mitigate risks associated with bias, operational failures, and accountability. Despite advancements in Artificial Intelligence (AI), the trustworthiness of these systems, especially in high-stakes military applications, remains a critical issue. Through a systematic review of existing literature, this research identifies gaps in the understanding of trust dynamics during the development and deployment phases of AWS. It advocates for a collaborative approach that includes technologists, ethicists, and military strategists to address these ongoing challenges. The findings underscore the importance of Human-Machine teaming and enhancing system intelligibility to ensure accountability and adherence to International Humanitarian Law. Ultimately, this paper aims to contribute to the ongoing discourse on the ethical implications of AWS and the imperative for trustworthy AI in defense contexts.


Iran-backed proxy group threatens more attacks on US troops

FOX News

Joseph Votel discusses tensions in the Middle East and how the Biden administration could respond to a drone attack that killed three U.S. soldiers, on 'The Story.' An Iran-backed militant group in Iraq has promised to continue attacks on U.S. troops after three American soldiers were killed by a drone strike in Jordan on Sunday. In a statement released Friday, Harakat al-Nujaba, one of the strongest Iraqi militias, announced that it plans to continue military operations against U.S. forces, while allied factions have backed off their attacks after the Biden administration said there would be retaliation. Akram al-Kaabi, the group's leader, called for an end to the Israeli military operations in Gaza and the withdrawal of the "American occupation of Iraq," in a statement posted on X. The announcement comes after Kataib Hezbollah, another powerful Iranian-backed Iraqi militia, which is closely monitored by the U.S. government, said on Tuesday that it would "suspend military and security operations against the occupying forces" to avoid embarrassing the Iraqi government.